
    Barycentric Subspace Analysis on Manifolds

    This paper investigates the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. We first propose a new and general family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of k+1 reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is impossible in previous generalizations of PCA. We show that barycentric subspaces locally define a submanifold of dimension k which generalizes geodesic subspaces. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). We show that Euclidean PCA minimizes the Accumulated Unexplained Variances by all the subspaces of the flag (AUV). Barycentric subspaces are naturally nested, allowing the construction of hierarchically nested subspaces. Optimizing the AUV criterion to optimally approximate data points with flags of affine spans in Riemannian manifolds leads to a particularly appealing generalization of PCA on manifolds, called Barycentric Subspace Analysis (BSA).
    Comment: Annals of Statistics, Institute of Mathematical Statistics, to appear.
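
    For readers wanting the defining equation: a weighted mean is a critical point of the weighted variance, so one standard way to write a barycentric subspace (a sketch; the paper carefully distinguishes several variants of this definition, and $\log_x$ denotes the Riemannian logarithm at $x$) is

        $\mathrm{EBS}(x_0, \dots, x_k) = \{\, x \in M : \exists (\lambda_0, \dots, \lambda_k),\ \sum_i \lambda_i \neq 0,\ \sum_i \lambda_i \log_x(x_i) = 0 \,\}$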

    Higher-Order Momentum Distributions and Locally Affine LDDMM Registration

    To achieve sparse parametrizations that allow intuitive analysis, we aim to represent deformation with a basis containing interpretable elements, and we wish to use elements with the descriptive capacity to represent the deformation compactly. To accomplish this, we introduce in this paper higher-order momentum distributions in the LDDMM registration framework. While the zeroth-order momenta previously used in LDDMM only describe local displacement, the first-order momenta proposed here form a basis that allows a local description of affine transformations and, in turn, a compact description of non-translational movement in a globally non-rigid deformation. The resulting representation contains directly interpretable information from both mathematical and modeling perspectives. We develop the mathematical construction of the registration framework with higher-order momenta, we show the implications for sparse image registration and deformation description, and we provide examples of how the parametrization enables registration with a very low number of parameters. The capacity and interpretability of the parametrization using higher-order momenta lead to natural modeling of articulated movement, and the method promises to be useful for quantifying ventricle expansion and progressive atrophy in Alzheimer's disease.
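
    As a concrete illustration of how zeroth- and first-order momenta act, here is a minimal Python sketch of the velocity field they induce under a scalar Gaussian kernel. The function name, the arguments, and the exact pairing convention (in particular the 1/sigma^2 scaling of the first-order term) are our assumptions for illustration, not the paper's implementation.

        import numpy as np

        def velocity(x, points, a, B, sigma=1.0):
            # Velocity at x induced by zeroth-order momenta a_j (vectors) and
            # first-order momenta B_j (matrices) attached to control points x_j,
            # with the scalar Gaussian kernel K(x, y) = exp(-|x - y|^2 / (2 sigma^2)).
            v = np.zeros_like(x, dtype=float)
            for xj, aj, Bj in zip(points, a, B):
                d = x - xj
                K = np.exp(-(d @ d) / (2.0 * sigma**2))
                v += K * aj                    # zeroth order: local translation
                v += K * (Bj @ d) / sigma**2   # first order: locally affine contribution
            return v

    Near a control point x_j the field behaves like a_j plus an affine term proportional to B_j (x - x_j), which is how a handful of parameters can encode a locally affine, globally non-rigid deformation.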

    Power Euclidean metrics for covariance matrices with application to diffusion tensor imaging

    Various metrics for comparing diffusion tensors have recently been proposed in the literature. We consider a broad family of metrics indexed by a single power parameter. A likelihood-based procedure is developed for choosing the most appropriate metric from the family for a given dataset. The approach is analogous to using the Box-Cox transformation that is frequently investigated in regression analysis. The methodology is illustrated with a simulation study and an application to a real dataset of diffusion tensor images of canine hearts.
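
    For concreteness, a minimal Python sketch of the power family under the common convention d_alpha(S1, S2) = ||S1^alpha - S2^alpha||_F / alpha, so that alpha = 1 is the Euclidean metric and alpha -> 0 recovers the log-Euclidean metric; the likelihood-based choice of alpha, which is the paper's contribution, is not shown.

        import numpy as np

        def spd_power(S, alpha):
            # S^alpha via the eigendecomposition of a symmetric positive definite S.
            w, V = np.linalg.eigh(S)
            return (V * w**alpha) @ V.T

        def power_euclidean(S1, S2, alpha):
            # d_alpha(S1, S2) = ||S1^alpha - S2^alpha||_F / alpha
            diff = spd_power(S1, alpha) - spd_power(S2, alpha)
            return np.linalg.norm(diff, 'fro') / alpha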

    Bures-Wasserstein minimizing geodesics between covariance matrices of different ranks

    The set of covariance matrices equipped with the Bures-Wasserstein distance is the orbit space of the smooth, proper and isometric action of the orthogonal group on the Euclidean space of square matrices. This construction induces a natural orbit stratification on covariance matrices, which is exactly the stratification by rank. Thus, the strata are the manifolds of symmetric positive semi-definite (PSD) matrices of fixed rank endowed with the Bures-Wasserstein Riemannian metric. In this work, we study the geodesics of the Bures-Wasserstein distance. Firstly, we complete the literature on geodesics in each stratum by clarifying the set of preimages of the exponential map and by specifying the injection domain. We also give explicit formulae for the horizontal lift, the exponential map and the Riemannian logarithms that were kept implicit in previous works. Secondly, we give the expression of all the minimizing geodesic segments joining two covariance matrices of any rank. More precisely, we show that the set of all minimizing geodesics between two covariance matrices $\Sigma$ and $\Lambda$ is parametrized by the closed unit ball of $\mathbb{R}^{(k-r)\times(l-r)}$ for the spectral norm, where $k, l, r$ are the respective ranks of $\Sigma$, $\Lambda$ and $\Sigma\Lambda$. In particular, the minimizing geodesic is unique if and only if $r = \min(k, l)$; otherwise, there are infinitely many.
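
    The distance itself has a well-known closed form, $d(\Sigma, \Lambda)^2 = \mathrm{tr}\,\Sigma + \mathrm{tr}\,\Lambda - 2\,\mathrm{tr}\big((\Sigma^{1/2} \Lambda \Sigma^{1/2})^{1/2}\big)$, valid for PSD matrices of any rank; a minimal Python sketch follows (the rank-stratified geodesic analysis is the paper's contribution and is not reproduced here).

        import numpy as np
        from scipy.linalg import sqrtm

        def bures_wasserstein(Sigma, Lam):
            # d(Sigma, Lam)^2 = tr(Sigma) + tr(Lam) - 2 tr((Sigma^{1/2} Lam Sigma^{1/2})^{1/2})
            s = sqrtm(Sigma)
            cross = np.real(np.trace(sqrtm(s @ Lam @ s)))  # sqrtm may return a complex dtype
            d2 = np.trace(Sigma) + np.trace(Lam) - 2.0 * cross
            return np.sqrt(max(d2, 0.0))  # guard against small negative rounding error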

    Stratified Principal Component Analysis

    This paper investigates a general family of models that stratifies the space of covariance matrices by eigenvalue multiplicity. This family, coined Stratified Principal Component Analysis (SPCA), includes in particular Probabilistic PCA (PPCA) models, where the noise component is assumed to be isotropic. We provide an explicit maximum likelihood estimate and a geometric characterization relying on flag manifolds. A key outcome of this analysis is that PPCA's parsimony (with respect to the full covariance model) is due to the eigenvalue-equality constraint in the noise space and the subsequent inference of a multidimensional eigenspace. The sequential nature of flag manifolds makes it possible to extend this constraint to the signal space and obtain more parsimonious models. Moreover, the stratification and the induced partial order on SPCA models yield efficient model selection heuristics. Experiments on simulated and real datasets substantiate the interest of equalising adjacent sample eigenvalues when the gaps are small and the number of samples is limited. They notably demonstrate that SPCA models achieve a better complexity/goodness-of-fit trade-off than PPCA.
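
    A minimal sketch of the eigenvalue-equalisation step the experiments refer to: project a sample covariance onto a stratum by averaging the eigenvalues within each multiplicity group, mirroring how PPCA averages the trailing noise eigenvalues. The function name, the interface, and the assumption that group-mean averaging gives the constrained maximum likelihood estimate are ours.

        import numpy as np

        def equalize_eigenvalues(sample_cov, groups):
            # `groups` partitions the descending eigenvalue indices,
            # e.g. [[0], [1, 2], [3, 4, 5]] for a 6 x 6 covariance.
            w, V = np.linalg.eigh(sample_cov)
            w, V = w[::-1].copy(), V[:, ::-1]  # sort eigenvalues in descending order
            for g in groups:
                w[g] = w[g].mean()             # equalise adjacent sample eigenvalues
            return (V * w) @ V.T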